Most existing text-video retrieval methods focus on cross-modal matching between the visual content of offline videos and textual query sentences. However, in real scenarios, online videos are often accompanied by relevant text information such as titles, tags, and even subtitles, which can be utilized to match textual queries. This observation inspires us to generate associated captions from offline videos to assist existing text-video retrieval methods. Specifically, we use a zero-shot video captioner built on the knowledge of pre-trained web-scale models (e.g., CLIP and GPT-2) to generate captions for offline videos without any training. Given these captions, a question naturally arises: what can auxiliary captions do for text-video retrieval? In this paper, we present Cap4Video, a novel framework that exploits captions in three ways: i) Input data: the video and its captions form new video-caption pairs for data augmentation during training. ii) Feature interaction: we perform feature interaction between video and captions to yield enhanced video representations. iii) Output score: the Query-Caption matching branch complements the original Query-Video matching branch for text-video retrieval. We conduct thorough ablation studies to demonstrate the effectiveness of our method. Without any post-processing, Cap4Video achieves state-of-the-art performance on MSR-VTT (51.4%), VATEX (66.6%), MSVD (51.8%), and DiDeMo (52.0%).
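The output-score idea above can be sketched as a late fusion of the two matching branches. This is a minimal illustration, not the paper's implementation: the interpolation weight `alpha` and the plain cosine-similarity fusion are assumptions.

```python
import numpy as np

def fuse_retrieval_scores(query_emb, video_embs, caption_embs, alpha=0.5):
    """Sketch of output-score fusion: combine Query-Video and
    Query-Caption cosine similarities. `alpha` is an assumed
    interpolation weight, not a value from the paper."""
    def cos_sim(q, m):
        q = q / np.linalg.norm(q)
        m = m / np.linalg.norm(m, axis=1, keepdims=True)
        return m @ q
    qv = cos_sim(query_emb, video_embs)    # Query-Video branch scores
    qc = cos_sim(query_emb, caption_embs)  # Query-Caption branch scores
    return alpha * qv + (1 - alpha) * qc   # complementary fusion
```

Ranking candidate videos by the fused score lets the caption branch correct cases where the visual branch alone is ambiguous.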
In this work, we propose an ID-preserving talking head generation framework, which advances previous methods in two aspects. First, as opposed to interpolating from sparse flow, we claim that dense landmarks are crucial to achieving accurate geometry-aware flow fields. Second, inspired by face-swapping methods, we adaptively fuse the source identity during synthesis, so that the network better preserves the key characteristics of the image portrait. Although the proposed model surpasses prior methods in generation fidelity on established benchmarks, personalized fine-tuning is usually needed to make talking head generation qualified for real usage. However, this process is rather computationally demanding and unaffordable to standard users. To solve this, we propose a fast adaptation model using a meta-learning approach. The learned model can be adapted into a high-quality personalized model in as little as 30 seconds. Last but not least, a spatial-temporal enhancement module is proposed to improve the fine details while ensuring temporal coherency. Extensive experiments demonstrate the significant superiority of our approach over the state of the art in both one-shot and personalized settings.
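Meta-learning for fast personalization can be illustrated with a Reptile-style outer update, which is an assumed stand-in for the paper's unspecified meta-learning approach: the shared initialization is nudged toward parameters fine-tuned on individual identities, so that a few adaptation steps suffice for a new person.

```python
import numpy as np

def reptile_meta_update(theta, adapted_thetas, meta_lr=0.1):
    """Hypothetical sketch of a meta-learning outer step (Reptile-style,
    an assumption, not the paper's algorithm): move the shared
    initialization `theta` toward parameters adapted per identity."""
    theta = np.asarray(theta, dtype=float)
    deltas = [np.asarray(t, dtype=float) - theta for t in adapted_thetas]
    return theta + meta_lr * np.mean(deltas, axis=0)
```

After many such outer steps, the initialization sits close to every identity's optimum, which is what makes ~30-second adaptation plausible.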
This paper presents a 3D generative model that uses diffusion models to automatically generate 3D digital avatars represented as neural radiance fields. A significant challenge in generating such avatars is that the memory and processing costs in 3D are prohibitive for producing the rich details required for high-quality avatars. To tackle this problem, we propose the roll-out diffusion network (Rodin), which represents a neural radiance field as multiple 2D feature maps and rolls out these maps into a single 2D feature plane within which we perform 3D-aware diffusion. The Rodin model brings the much-needed computational efficiency while preserving the integrity of diffusion in 3D by using 3D-aware convolution that attends to projected features in the 2D feature plane according to their original relationship in 3D. We also use latent conditioning to orchestrate the feature generation for global coherence, leading to high-fidelity avatars and enabling their semantic editing based on text prompts. Finally, we use hierarchical synthesis to further enhance details. The 3D avatars generated by our model compare favorably with those produced by existing generative techniques. We can generate highly detailed avatars with realistic hairstyles and facial hair like beards. We also demonstrate 3D avatar generation from image or text as well as text-guided editability.
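The roll-out step can be pictured as flattening a tri-plane-style representation into one canvas that a single 2D network can process. The sketch below only shows the tensor bookkeeping under assumed shapes; the 3D-aware attention that Rodin adds on top is not reproduced here.

```python
import numpy as np

def roll_out_triplane(planes):
    """Sketch of the roll-out idea: lay the per-axis 2D feature maps of a
    radiance-field representation side by side into one 2D feature plane,
    so diffusion can run in a single 2D domain. Shapes are illustrative
    assumptions: each plane is (channels, height, width)."""
    return np.concatenate(planes, axis=2)  # (C, H, num_planes * W)
```

A 2D diffusion network applied to this rolled-out plane sees all three axis-aligned feature maps at once, which is what makes cross-plane (3D-aware) operations possible without volumetric cost.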
Supervised learning aims to train a classifier under the assumption that training and test data are from the same distribution. To ease this assumption, researchers have studied a more realistic setting: out-of-distribution (OOD) detection, where test data may come from classes that are unknown during training (i.e., OOD data). Due to the unavailability and diversity of OOD data, good generalization ability is crucial for effective OOD detection algorithms. To study the generalization of OOD detection, in this paper, we investigate the probably approximately correct (PAC) learning theory of OOD detection, which has been posed by researchers as an open problem. First, we find a necessary condition for the learnability of OOD detection. Then, using this condition, we prove several impossibility theorems for the learnability of OOD detection under certain scenarios. Although the impossibility theorems are frustrating, we find that some of their conditions may not hold in practical scenarios. Based on this observation, we then give several necessary and sufficient conditions that characterize the learnability of OOD detection in practical scenarios. Lastly, we also offer theoretical support for several representative OOD detection methods based on our OOD theory.
In contrast to the traditional avatar creation pipeline, which is a costly process, modern generative approaches directly learn the data distribution from photographs, and the state of the art can now yield highly photo-realistic images. While plenty of works attempt to extend unconditional generative models and achieve some degree of controllability, it remains challenging to ensure multi-view consistency, especially under large poses. In this work, we propose a 3D portrait generation network that produces 3D-consistent portraits while being controllable according to semantic parameters for pose, identity, expression, and illumination. The generation network uses a neural scene representation to model portraits in 3D, whose generation is guided by a parametric face model that supports explicit control. While the latent disentanglement can be further enhanced by contrasting images with partially different attributes, noticeable inconsistencies remain in non-face regions (e.g., when animating expressions). We solve this by proposing a volume blending strategy in which we form a composite output by blending the dynamic and static radiance fields, with the two parts segmented from a jointly learned semantic field. Our method outperforms prior arts in extensive experiments, producing realistic portraits in natural lighting when viewed from free viewpoints. The proposed method also demonstrates generalization to real images as well as out-of-domain cartoon faces, showing great promise for real applications. Additional video results and code will be available on the project webpage.
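The volume blending strategy can be sketched as a per-point convex combination of the two radiance fields, weighted by the learned semantic segmentation. The convex-combination form and the shapes below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def blend_radiance(dynamic_rgb_sigma, static_rgb_sigma, face_mask):
    """Sketch of volume blending: fuse a dynamic (face) and a static
    (non-face) radiance field into one composite output using a per-point
    weight from a jointly learned semantic field. Inputs are assumed to be
    (H, W, 4) color+density samples and an (H, W) soft mask in [0, 1]."""
    m = face_mask[..., None]  # broadcast the mask over the rgb+sigma channel
    return m * dynamic_rgb_sigma + (1.0 - m) * static_rgb_sigma
```

With a soft mask, the blend transitions smoothly at the face boundary, which is what keeps hair and background stable while the expression animates.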
This paper studies the problem of controlling a multi-robot system to achieve polygon formations in a self-organized manner. Unlike typical formation control strategies, in which robots are steered to satisfy predefined control variables such as pairwise distances, relative positions, and bearings, the foremost idea of this paper is to achieve polygon formations by assigning control inputs to only a few robots of the group (say, the vertex robots), while the rest follow the simple principle of moving toward the midpoint of their two nearest neighbors in the ring graph, without any external input. In our problem, the robots are initially distributed on a plane. The so-called vertex robots are responsible for determining the geometry of the whole formation and its overall size, while the others move so as to minimize the discrepancy with their two direct neighbors. In the first step, each vertex robot estimates the number of robots in its associated chain; two types of control inputs for this estimation are designed, using measurements from the latest time instant and the latest two time instants, respectively. In the second step, a self-organized formation control law is proposed in which only the vertex robots receive external information. The two estimation strategies are compared in terms of convergence speed and robustness. The effectiveness of the whole control framework is further validated in both simulations and physical experiments.
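The midpoint-following rule is simple enough to simulate directly. The sketch below is a minimal discrete-time version under assumed gains and step size; the estimation phase and the paper's actual control law are not reproduced.

```python
import numpy as np

def formation_step(positions, vertex_idx, vertex_inputs, dt=0.1):
    """One discrete step of the self-organized rule: non-vertex robots in a
    ring graph move toward the midpoint of their two ring neighbors, while
    vertex robots follow external inputs. `dt` and the unit gain are
    illustrative assumptions."""
    n = len(positions)
    new_pos = positions.copy()
    for i in range(n):
        if i in vertex_idx:
            # Vertex robots are driven by externally assigned inputs.
            new_pos[i] = positions[i] + dt * vertex_inputs[i]
        else:
            # Followers head toward the midpoint of their ring neighbors.
            left, right = positions[(i - 1) % n], positions[(i + 1) % n]
            midpoint = (left + right) / 2.0
            new_pos[i] = positions[i] + dt * (midpoint - positions[i])
    return new_pos
```

Iterating this step spreads the followers evenly along the chains between vertex robots, which is what produces the polygon sides.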
Aspect-based sentiment analysis (ABSA) aims to predict the sentiment polarity expressed toward a given aspect (SC) or to extract opinion spans (OE). Prior work on ABSA mostly relies on rather complicated aspect-specific feature induction. Recently, pretrained language models (PLMs) such as BERT have been used as context modeling layers to simplify the feature induction structure and achieve state-of-the-art performance. However, such PLM-based context modeling may not be aspect-specific. Therefore, a key question has been insufficiently explored: how can aspect-specific context be better modeled with PLMs? To answer it, we attempt to enhance aspect-specific context modeling with PLMs in a non-intrusive manner. We propose three aspect-specific input transformations, namely aspect companion, aspect prompt, and aspect marker. Through these transformations, non-intrusive aspect-specific PLMs can be achieved, prompting the PLMs to pay more attention to the aspect-specific context in a sentence. In addition, we craft an adversarial benchmark for ABSA (AdvABSA) to see how aspect-specific modeling affects model robustness. Extensive experimental results on standard and adversarial benchmarks for SC and OE demonstrate the effectiveness and robustness of the proposed method, yielding new state-of-the-art performance on OE and competitive performance on SC.
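Since the three transformations act only on the model input, they can be sketched as plain string rewrites. The templates and marker tokens below are illustrative assumptions, not the paper's exact wording.

```python
def aspect_transform(sentence, aspect, mode="marker"):
    """Hedged sketch of non-intrusive aspect-specific input
    transformations. Templates and special tokens are assumptions."""
    if mode == "companion":
        # Append the aspect as a companion segment after the sentence.
        return f"{sentence} [SEP] {aspect}"
    if mode == "prompt":
        # Append an aspect-oriented natural-language prompt (assumed wording).
        return f"{sentence} [SEP] what do you think of the {aspect}?"
    if mode == "marker":
        # Surround the in-sentence aspect mention with marker tokens.
        return sentence.replace(aspect, f"[ASP] {aspect} [/ASP]")
    raise ValueError(f"unknown mode: {mode}")
```

Because only the input changes, the PLM itself needs no architectural modification, which is what "non-intrusive" means here.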
Real-time bidding (RTB) is an important mechanism in modern online advertising systems. Advertisers employ bidding strategies in RTB to optimize their advertising effects subject to various financial requirements, among which a widely adopted one is the return-on-investment (ROI) constraint. ROI changes non-monotonically during the sequential bidding process, usually exhibiting a see-saw effect between constraint satisfaction and objective optimization. Existing solutions to this constraint-objective trade-off are typically established in static or mildly changing markets. However, these methods fail significantly in non-stationary advertising markets due to their inability to adapt to varying dynamics and partial observability. In this work, we specialize in ROI-constrained bidding in non-stationary markets. Based on a partially observable Markov decision process, we propose the first hard-barrier solution that accommodates non-monotonic constraints. Our method exploits a parameter-free, indicator-based reward function and develops a curriculum-guided Bayesian reinforcement learning (CBRL) framework to adaptively control the constraint-objective trade-off in non-stationary advertising markets. Extensive experiments on a large-scale industrial dataset with two problem settings reveal that CBRL generalizes well in both in-distribution and out-of-distribution data regimes and enjoys outstanding stability.
Despite the tantalizing success in a broad range of vision tasks, transformers have not yet demonstrated on-par ability with ConvNets in high-resolution image generative modeling. In this paper, we seek to explore using pure transformers to build a generative adversarial network for high-resolution image synthesis. To this end, we believe that local attention is crucial to strike the balance between computational efficiency and modeling capacity. Hence, the proposed generator adopts the Swin transformer in a style-based architecture. To achieve a larger receptive field, we propose double attention, which simultaneously leverages the context of local and shifted windows, leading to improved generation quality. Moreover, we show that providing the knowledge of absolute positions, which is lost in window-based transformers, greatly benefits the generation quality. The proposed StyleSwin is scalable to high resolutions, with both the coarse geometry and fine structures benefiting from the strong expressivity of transformers. However, blocking artifacts occur during high-resolution synthesis because performing local attention in a block-wise manner may break spatial coherency. To solve this, we empirically investigate various solutions, among which we find that employing a wavelet discriminator to examine spectral discrepancy effectively suppresses the artifacts. Extensive experiments show superiority over prior transformer-based GANs, especially at high resolutions, e.g., 1024x1024. Without complex training strategies, StyleSwin excels over StyleGAN on CelebA-HQ 1024 and achieves on-par performance on FFHQ-1024, demonstrating the promise of using transformers for high-resolution image generation. The code and models will be available at https://github.com/microsoft/styleswin.
Traffic forecasting is important in intelligent transportation systems and beneficial to traffic safety, yet it is very challenging because of the complex and dynamic spatio-temporal dependencies in real-world traffic systems. Prior methods use pre-defined or learnable static graphs to extract spatial correlations. However, static-graph-based methods cannot capture the evolution of traffic networks. Researchers have subsequently generated a dynamic graph for each time slice to reflect the changes in spatial correlations, but they follow the paradigm of modeling spatio-temporal dependencies independently, ignoring cross-time spatial influence. In this paper, we propose a novel cross-time dynamic graph-based deep learning model, named CDGNet, for traffic forecasting. The model effectively captures the cross-time spatial dependence between each time slice and its historical time slices by utilizing cross-time dynamic graphs. Meanwhile, we design a gating mechanism to sparsify the cross-time dynamic graphs, which conforms to the sparse spatial correlations in the real world. Furthermore, we propose a novel encoder-decoder architecture that incorporates cross-time dynamic graph-based GCNs for multi-step traffic forecasting. Experimental results on three real-world public traffic datasets demonstrate that CDGNet outperforms state-of-the-art baselines. We also provide a qualitative study to analyze the effectiveness of our architecture.
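The sparsification idea can be illustrated with a simple top-k gating over a cross-time affinity matrix. The top-k rule and the row normalization below are assumed stand-ins for the paper's gating mechanism, shown only to make the "sparse cross-time graph" notion concrete.

```python
import numpy as np

def gated_sparse_adjacency(scores, k):
    """Sketch of a gating mechanism that sparsifies a cross-time
    affinity matrix: keep only the k largest entries per row, zero the
    rest, then row-normalize so each row forms aggregation weights."""
    adj = np.zeros_like(scores)
    for i, row in enumerate(scores):
        top = np.argsort(row)[-k:]  # indices of the k largest scores
        adj[i, top] = row[top]
    row_sums = adj.sum(axis=1, keepdims=True)
    return adj / np.where(row_sums == 0, 1.0, row_sums)
```

A GCN layer that aggregates historical node features through such a sparse matrix only mixes each node with its few most relevant cross-time neighbors, matching the sparse spatial correlations observed in real road networks.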